Results 1 - 4 of 4
1.
J Am Med Inform Assoc; 2022 Oct 10.
Article in English | MEDLINE | ID: covidwho-2325500

ABSTRACT

OBJECTIVE: Federated learning (FL) allows multiple distributed data holders to collaboratively learn a shared model without data sharing. However, individual health system data are heterogeneous. "Personalized" FL variations have been developed to counter data heterogeneity, but few have been evaluated using real-world healthcare data. The purpose of this study is to investigate the performance of a single-site versus a 3-client federated model using a previously described COVID-19 diagnostic model. Additionally, to investigate the effect of system heterogeneity, we evaluate the performance of 4 FL variations.

MATERIALS AND METHODS: We leverage an FL healthcare collaborative including data from 5 international healthcare systems (US and Europe) encompassing 42 hospitals. We implemented a COVID-19 computer vision diagnosis system using the FedAvg algorithm implemented in Clara Train SDK 4.0. To study the effect of data heterogeneity, training data from 3 systems were pooled locally and federation was simulated. We compared a centralized/pooled model, FedAvg, and 3 personalized FL variations (FedProx, FedBN, and FedAMP).

RESULTS: We observed comparable model performance on internal validation (local model: AUROC 0.94 vs FedAvg: 0.95, p = 0.5) and improved model generalizability with the FedAvg model (p < 0.05). When investigating the effects of model heterogeneity, we observed poorer performance with FedAvg on internal validation compared with the personalized FL algorithms, although FedAvg had better generalizability than the personalized FL algorithms. On average, FedBN had the best rank performance on internal and external validation.

CONCLUSION: FedAvg can significantly improve model generalization compared with personalized FL algorithms, albeit at the cost of poorer internal validity. Personalized FL may offer an opportunity to develop algorithms that are both internally and externally valid.
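For readers less familiar with federated averaging, the sketch below illustrates the core FedAvg aggregation step: the server combines client model weights as an average weighted by each client's local sample count. This is a minimal illustration under assumed names (fedavg_aggregate, client_weights, client_sizes), not the Clara Train SDK 4.0 implementation the study actually used; personalized variants differ mainly in what gets aggregated (FedBN, for example, is typically described as keeping batch-normalization parameters local).

```python
# Minimal sketch of one FedAvg aggregation round (illustrative only; the
# study used the FedAvg implementation in NVIDIA Clara Train SDK 4.0).
from typing import Dict, List

import numpy as np


def fedavg_aggregate(client_weights: List[Dict[str, np.ndarray]],
                     client_sizes: List[int]) -> Dict[str, np.ndarray]:
    """Average per-client parameter dictionaries, weighted by local dataset size."""
    total = float(sum(client_sizes))
    return {
        name: sum((n / total) * weights[name]
                  for weights, n in zip(client_weights, client_sizes))
        for name in client_weights[0]
    }


# Hypothetical example: three clients with different amounts of local data.
clients = [{"conv1.weight": np.full(4, value)} for value in (1.0, 2.0, 3.0)]
global_weights = fedavg_aggregate(clients, client_sizes=[100, 300, 600])
print(global_weights["conv1.weight"])  # size-weighted mean: [2.5 2.5 2.5]
```

The weighting by sample count is what distinguishes FedAvg from a plain unweighted mean of client models; a personalized variant would skip this averaging for the layers it keeps local.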

2.
Radiol Artif Intell; 4(4): e210217, 2022 Jul.
Article in English | MEDLINE | ID: covidwho-1968372

ABSTRACT

Purpose: To conduct a prospective observational study across 12 U.S. hospitals to evaluate real-time performance of an interpretable artificial intelligence (AI) model to detect COVID-19 on chest radiographs.

Materials and Methods: A total of 95,363 chest radiographs were included in model training, external validation, and real-time validation. The model was deployed as a clinical decision support system, and performance was prospectively evaluated. There were 5335 total real-time predictions and a COVID-19 prevalence of 4.8% (258 of 5335). Model performance was assessed with use of receiver operating characteristic analysis, precision-recall curves, and F1 score. Logistic regression was used to evaluate the association of race and sex with AI model diagnostic accuracy. To compare model accuracy with the performance of board-certified radiologists, a third dataset of 1638 images was read independently by two radiologists.

Results: Participants positive for COVID-19 had higher COVID-19 diagnostic scores than participants negative for COVID-19 (median, 0.1 [IQR, 0.0-0.8] vs 0.0 [IQR, 0.0-0.1], respectively; P < .001). Real-time model performance was unchanged over 19 weeks of implementation (area under the receiver operating characteristic curve, 0.70; 95% CI: 0.66, 0.73). Model sensitivity was higher in men than in women (P = .01), whereas model specificity was higher in women (P = .001). Sensitivity was higher for Asian (P = .002) and Black (P = .046) participants than for White participants. The COVID-19 AI diagnostic system had worse accuracy (63.5% correct) than the radiologists (radiologist 1 = 67.8% correct, radiologist 2 = 68.6% correct; McNemar P < .001 for both).

Conclusion: AI-based tools have not yet reached full diagnostic potential for COVID-19 and underperform compared with radiologist prediction.

Keywords: Diagnosis, Classification, Application Domain, Infection, Lung. Supplemental material is available for this article. © RSNA, 2022.
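As a rough illustration of the evaluation workflow described above (ROC analysis, precision-recall curves, F1 score, and a McNemar test of paired model-versus-radiologist correctness), the sketch below uses scikit-learn and statsmodels on synthetic placeholder data. All variable names, the 0.5 decision threshold, and the simulated reads are assumptions for illustration, not the study's data or code.

```python
# Illustrative evaluation sketch on synthetic data (not the study's data).
import numpy as np
from sklearn.metrics import f1_score, precision_recall_curve, roc_auc_score
from statsmodels.stats.contingency_tables import mcnemar

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=1000)                # synthetic ground-truth labels
model_scores = 0.3 * y_true + 0.7 * rng.random(1000)  # simulated model probabilities
model_preds = (model_scores >= 0.5).astype(int)       # assumed decision threshold

auroc = roc_auc_score(y_true, model_scores)
precision, recall, _ = precision_recall_curve(y_true, model_scores)
f1 = f1_score(y_true, model_preds)

# McNemar test on paired correctness of the model vs. one (simulated) radiologist.
model_correct = model_preds == y_true
radiologist_correct = rng.random(1000) < 0.68         # placeholder radiologist reads
table = [
    [int(np.sum(model_correct & radiologist_correct)),
     int(np.sum(model_correct & ~radiologist_correct))],
    [int(np.sum(~model_correct & radiologist_correct)),
     int(np.sum(~model_correct & ~radiologist_correct))],
]
result = mcnemar(table, exact=False, correction=True)
print(f"AUROC={auroc:.2f}  F1={f1:.2f}  McNemar p={result.pvalue:.3f}")
```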

3.
Clin Imaging; 82: 77-82, 2022 Feb.
Article in English | MEDLINE | ID: covidwho-1574394

ABSTRACT

BACKGROUND: Chest radiographs (CXRs) are frequently used as a screening tool for patients with suspected COVID-19 infection pending reverse transcriptase polymerase chain reaction (RT-PCR) results, despite recommendations against this. We evaluated radiologist performance for COVID-19 diagnosis on CXR at the time of patient presentation in the Emergency Department (ED).

MATERIALS AND METHODS: We extracted RT-PCR results, clinical history, and CXRs of all patients from a single institution between March and June 2020. 984 RT-PCR positive and 1043 RT-PCR negative radiographs were reviewed by 10 emergency radiologists from 4 academic centers. 100 cases were read by all radiologists and 1927 cases by 2 radiologists. Each radiologist chose the single best label per case: Normal, COVID-19, Other - Infectious, Other - Noninfectious, Non-diagnostic, or Endotracheal Tube. Cases labeled as endotracheal tube (246) or non-diagnostic (54) were excluded. The remaining cases were analyzed for label distribution, clinical history, and inter-reader agreement.

RESULTS: 1727 radiographs (732 RT-PCR positive, 995 RT-PCR negative) were included from 1594 patients (51.2% male, 48.8% female, age 59 ± 19 years). For the 89 cases read by all readers, there was poor agreement for RT-PCR positive (Fleiss score 0.36) and negative (Fleiss score 0.46) exams. Agreement between two readers on 1638 cases was 54.2% (373/688) for RT-PCR positive cases and 71.4% (679/950) for negative cases. Agreement was highest for RT-PCR negative cases labeled as Normal (50.4%, n = 479). Reader performance did not improve with clinical history or time between CXR and RT-PCR result.

CONCLUSION: At the time of presentation to the emergency department, emergency radiologist performance is nonspecific for diagnosing COVID-19.
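A minimal sketch of the inter-reader agreement analysis referenced above, assuming statsmodels' Fleiss kappa plus simple pairwise percent agreement, on synthetic placeholder labels. The reader and case counts echo the abstract, but the simulated labels, the category list, and the function names are illustrative assumptions, not the study's data or code.

```python
# Illustrative inter-reader agreement sketch on synthetic labels.
import numpy as np
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

CATEGORIES = ["Normal", "COVID-19", "Other - Infectious", "Other - Noninfectious"]
rng = np.random.default_rng(0)

# ratings[i, j] = category index chosen by reader j for case i (89 cases, 10 readers).
ratings = rng.integers(0, len(CATEGORIES), size=(89, 10))
counts, _ = aggregate_raters(ratings, n_cat=len(CATEGORIES))  # per-case category counts
kappa = fleiss_kappa(counts)

# Percent agreement for cases read by a pair of readers (1638 such cases in the abstract).
pair = rng.integers(0, len(CATEGORIES), size=(1638, 2))
percent_agreement = float(np.mean(pair[:, 0] == pair[:, 1]))
print(f"Fleiss kappa = {kappa:.2f}  pairwise agreement = {percent_agreement:.1%}")
```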


Subject(s)
COVID-19 , Adult , Aged , COVID-19 Testing , Emergency Service, Hospital , Female , Humans , Male , Middle Aged , Radiography, Thoracic , Radiologists , Retrospective Studies , SARS-CoV-2
4.
AJR Am J Roentgenol; 215(6): 1411-1416, 2020 Dec.
Article in English | MEDLINE | ID: covidwho-976134

ABSTRACT

OBJECTIVE. In recent decades, teleradiology has expanded considerably, and many radiology practices now engage in intraorganizational or extraorganizational teleradiology. In this era of patient primacy, optimizing patient care and care delivery is paramount. This article provides an update on recent changes, current challenges, and future opportunities centered on the ability of teleradiology to improve temporal and geographic imaging access. We review licensing and regulations and discuss teleradiology in providing services to rural areas and assisting with disaster response, including the response to the coronavirus disease (COVID-19) pandemic.

CONCLUSION. Teleradiology can help increase imaging efficiency and mitigate both geographic and temporal discrepancies in imaging care. Technologic limitations and regulatory hurdles hinder the optimal practice of teleradiology, and future attention to these issues may help ensure broader patient access to high-quality imaging across the United States.


Subject(s)
COVID-19/epidemiology , Teleradiology/trends , Confidentiality , Humans , Licensure, Medical , Physical Distancing , SARS-CoV-2 , United States/epidemiology